DeepSeek OCR

Author: Haoran Wei, Yaofeng Sun, Yukun Li

Date: October 21, 2025

Link: https://arxiv.org/abs/2510.18234

DeepSeek-OCR: Contexts Optical Compression

OCR and Document Understanding

Introduction

DeepSeek-OCR investigates compressing long contexts through optical 2D mapping: rendering text as images and encoding those images into far fewer vision tokens. The approach targets one of the most pressing challenges in modern large language models: efficiently processing and retaining long document contexts.

The Context Compression Challenge

Large language models face significant challenges when dealing with long documents:

  • Token Limit Constraints: Most LLMs have maximum context window sizes
  • Computational Cost: Processing thousands of tokens is expensive
  • Memory Requirements: Storing long contexts requires substantial memory
  • Attention Complexity: Attention mechanisms scale quadratically with sequence length
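The quadratic attention cost in the last bullet can be made concrete with a quick back-of-the-envelope calculation (a sketch only; real costs depend on architecture, implementation, and hardware, and `d_model=4096` is an assumed value):

```python
def attention_flops(seq_len: int, d_model: int = 4096) -> int:
    """Rough FLOPs for one self-attention layer: the QK^T and AV matmuls,
    each about 2 * seq_len^2 * d_model multiply-adds."""
    return 2 * 2 * seq_len ** 2 * d_model

# Doubling the context quadruples the attention cost.
base = attention_flops(4_000)
doubled = attention_flops(8_000)
print(doubled / base)  # 4.0
```

This is exactly why shrinking the token count (rather than the model) pays off so quickly: a 10x reduction in sequence length cuts attention cost by roughly 100x.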

DeepSeek-OCR tackles these challenges by introducing a novel optical compression approach.

Architecture Overview

DeepSeek-OCR consists of two main components:

1. DeepEncoder

The DeepEncoder serves as the core compression engine with unique design goals:

  • Low Activation: Maintains minimal activations even with high-resolution input
  • High Compression Ratios: Achieves significant reduction in token count
  • Optimal Token Management: Ensures a manageable number of vision tokens

2. DeepSeek3B-MoE-A570M Decoder

The decoder component processes the compressed visual representations to extract text with high accuracy.

The complete pipeline:

Text Document → 2D Visual Mapping → DeepEncoder → Vision Tokens → Decoder → Extracted Text

Optical 2D Mapping: A Novel Approach

The key innovation is mapping text into 2D visual space:

# Illustrative sketch of the idea; DeepEncoder, tokenize, and
# render_text_to_image are placeholders, not the released API.
class OpticalCompressor:
    def __init__(self, compression_ratio, image_resolution):
        """
        Optical compression of text documents.

        Args:
            compression_ratio: Target compression ratio (e.g., 10x, 20x)
            image_resolution: Resolution of the 2D mapping
        """
        self.compression_ratio = compression_ratio
        self.resolution = image_resolution
        self.encoder = DeepEncoder()

    def compress_document(self, text_document):
        # Convert text to a 2D visual representation
        visual_repr = self.text_to_2d_image(text_document)

        # Encode with DeepEncoder
        vision_tokens = self.encoder(visual_repr)

        # Measure the achieved compression ratio
        num_vision_tokens = len(vision_tokens)
        original_tokens = len(tokenize(text_document))
        actual_ratio = original_tokens / num_vision_tokens

        return vision_tokens, actual_ratio

    def text_to_2d_image(self, text):
        """Render a text document as a 2D image."""
        return render_text_to_image(text, self.resolution)

Performance Benchmarks

Compression Ratio vs. Accuracy

DeepSeek-OCR demonstrates impressive performance across different compression ratios:

| Compression Ratio | OCR Accuracy | Use Case |
|---|---|---|
| < 10x | 97% | Production-ready, high-precision OCR |
| 20x | ~60% | Exploratory, memory-efficient processing |

Key Finding: When the number of text tokens is within 10 times that of vision tokens (compression ratio < 10x), the model achieves 97% decoding precision.
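This finding suggests a simple budgeting rule: keep the ratio of text tokens to vision tokens under 10. A hypothetical helper (not from the paper) illustrating the check:

```python
def within_high_precision_regime(text_tokens: int, vision_tokens: int,
                                 max_ratio: float = 10.0) -> bool:
    """True when the compression ratio stays in the ~97%-precision band
    reported by the paper (text tokens / vision tokens < 10)."""
    return text_tokens / vision_tokens < max_ratio

print(within_high_precision_regime(6_000, 800))   # True  (ratio 7.5)
print(within_high_precision_regime(16_000, 800))  # False (ratio 20)
```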

Comparison with Existing Solutions

vs. GOT-OCR2.0

  • GOT-OCR2.0: 256 tokens per page
  • DeepSeek-OCR: 100 vision tokens per page
  • Result: Surpasses GOT-OCR2.0 with 60% fewer tokens

vs. MinerU2.0

  • MinerU2.0: 6000+ tokens per page on average
  • DeepSeek-OCR: < 800 vision tokens per page
  • Result: Outperforms while using 87% fewer tokens
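The savings quoted above follow directly from the per-page token counts (the counts are the paper-reported figures; the percentages are simple arithmetic):

```python
def token_savings(baseline: int, ours: int) -> float:
    """Percentage reduction in tokens per page versus a baseline."""
    return 100 * (baseline - ours) / baseline

print(round(token_savings(256, 100)))    # vs. GOT-OCR2.0: ~61% fewer tokens
print(round(token_savings(6000, 800)))   # vs. MinerU2.0: ~87% fewer tokens
```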

Production Benchmarks: OmniDocBench

On the OmniDocBench dataset, DeepSeek-OCR demonstrates superior efficiency and performance, establishing new standards for document understanding systems.

Practical Applications

Large-Scale Training Data Generation

DeepSeek-OCR has exceptional practical value for production systems:

Throughput: 200,000+ pages per day
Hardware: Single A100-40G GPU
Application: Training data generation for LLMs/VLMs

This massive throughput enables:

  • Rapid dataset creation for large language models
  • Cost-effective document processing at scale
  • Efficient training of vision-language models
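At the stated throughput, the per-page numbers work out roughly as follows (a sketch assuming a uniform 24-hour run on the single A100-40G):

```python
pages_per_day = 200_000
seconds_per_day = 24 * 60 * 60

pages_per_second = pages_per_day / seconds_per_day
print(round(pages_per_second, 2))                 # 2.31 pages/s
print(round(seconds_per_day / pages_per_day, 3))  # 0.432 s/page
```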

Production Deployment

# Hypothetical deployment wrapper; load_deepencoder, load_deepseek_decoder,
# load_document, and estimate_token_count are placeholders, not the released API.
class DeepSeekOCRPipeline:
    def __init__(self, model_path, device='cuda'):
        self.encoder = load_deepencoder(model_path)
        self.decoder = load_deepseek_decoder(model_path)
        self.device = device

    def process_document(self, document_path, max_vision_tokens=800):
        """
        Process a document with DeepSeek-OCR

        Args:
            document_path: Path to document (PDF, image, etc.)
            max_vision_tokens: Maximum vision tokens to generate

        Returns:
            Extracted text with metadata
        """
        # Load and prepare document
        document = load_document(document_path)

        # Compress to visual representation
        vision_tokens = self.encoder(
            document,
            max_tokens=max_vision_tokens
        )

        # Decode to text
        extracted_text = self.decoder(vision_tokens)

        # Calculate compression metrics
        original_size = estimate_token_count(document)
        compression_ratio = original_size / len(vision_tokens)

        return {
            'text': extracted_text,
            'vision_tokens': len(vision_tokens),
            'compression_ratio': compression_ratio,
            'accuracy_estimate': self.estimate_accuracy(compression_ratio)
        }

    def estimate_accuracy(self, compression_ratio):
        """Estimate OCR accuracy based on compression ratio"""
        if compression_ratio < 10:
            return 0.97
        elif compression_ratio < 20:
            return 0.60 + (20 - compression_ratio) * 0.037
        else:
            return 0.60

Research Implications

DeepSeek-OCR opens exciting research directions:

1. Historical Long-Context Compression

The optical compression approach shows considerable promise for:

  • Archival document processing: Efficiently handling historical texts
  • Context window extension: Enabling models to process longer documents
  • Multi-document reasoning: Compressing multiple documents into manageable contexts

2. Memory Forgetting Mechanisms

The compression and reconstruction process provides insights into:

  • Information prioritization: What information is retained at different compression ratios
  • Lossy compression effects: How compression affects downstream task performance
  • Memory dynamics: Understanding forgetting mechanisms in LLMs

3. Vision-Language Model Training

The ability to generate 200k+ pages per day enables:

  • Massive-scale datasets: Creating large-scale training corpora
  • Diverse document types: Processing various document formats
  • Quality-controlled data: High-accuracy OCR for reliable training data

Technical Deep Dive

DeepEncoder Architecture

The DeepEncoder is designed with several key constraints:

# Illustrative PyTorch sketch, not the released architecture;
# SpatialAttention is a placeholder module.
import torch.nn as nn

class DeepEncoderBlock(nn.Module):
    """
    DeepEncoder block optimized for high-resolution input
    and low activation
    """
    def __init__(self, in_channels, out_channels, compression_factor):
        super().__init__()
        self.conv_layers = nn.Sequential(
            # Strided convolution for spatial compression
            nn.Conv2d(in_channels, out_channels,
                      kernel_size=3, stride=compression_factor),
            nn.BatchNorm2d(out_channels),
            nn.GELU(),

            # Attention mechanism for feature selection
            SpatialAttention(out_channels),

            # Pointwise convolution to mix channels
            nn.Conv2d(out_channels, out_channels,
                      kernel_size=1, stride=1)
        )

    def forward(self, x):
        return self.conv_layers(x)

Maintaining Low Activations

Key design principles for low activation:

  1. Sparse Attention: Not all regions of the document image are equally important
  2. Progressive Compression: Multiple stages of compression with validation
  3. Adaptive Resolution: Adjust resolution based on document complexity
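Progressive compression can be sketched as a chain of stages, each checked against a target token budget (hypothetical stage factors; the paper does not publish an exact stage schedule):

```python
def progressive_compress(tokens: int, stage_factors: list[float],
                         budget: int) -> list[int]:
    """Apply compression stages in sequence, recording the token count
    after each stage; stop early once the budget is met."""
    counts = []
    for factor in stage_factors:
        tokens = int(tokens / factor)
        counts.append(tokens)
        if tokens <= budget:
            break
    return counts

# 10,000 text tokens through three stages toward an 800-token budget.
print(progressive_compress(10_000, [2.5, 2.5, 2.0], budget=800))
# [4000, 1600, 800]
```

Validating after each stage (rather than compressing in one shot) is what lets the encoder keep activations low while still hitting a high overall ratio.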

High Compression Ratio Achievement

Strategies for achieving high compression ratios:

Original Document (10,000 tokens)
    ↓ 2D Visual Mapping
Document Image (high resolution)
    ↓ DeepEncoder (compression)
Vision Tokens (500-1000 tokens)
    ↓ Compression Ratio
10x to 20x reduction

Open Source Availability

DeepSeek-OCR is publicly accessible: the code and model weights have been released, enabling broader research and applications.

Comparison with Traditional OCR

| Approach | Token Efficiency | Accuracy | Speed | Use Case |
|---|---|---|---|---|
| Traditional OCR | Low (1:1 mapping) | High (>95%) | Fast | Individual documents |
| GOT-OCR2.0 | Medium (256/page) | High | Medium | Batch processing |
| MinerU2.0 | Low (6000+/page) | High | Slow | Detailed extraction |
| DeepSeek-OCR | High (<800/page) | High (97% @ <10x) | Very fast | Large-scale production |

Future Directions

Enhanced Compression Techniques

Research opportunities include:

  • Adaptive compression: Dynamically adjust compression based on content
  • Multi-modal fusion: Combine text and visual information
  • Hierarchical compression: Different compression levels for different document sections

Broader Applications

Potential extensions beyond OCR:

  • Video understanding: Applying optical compression to video frames
  • Multi-page reasoning: Processing entire books or long reports
  • Cross-lingual documents: Handling documents in multiple languages

Conclusion

DeepSeek-OCR represents a significant breakthrough in document understanding and context compression. By introducing optical 2D mapping and achieving compression ratios of 10-20x while maintaining high accuracy, it addresses critical challenges in processing long documents.

Key Achievements:

  • 97% accuracy at compression ratios below 10x
  • Outperforms existing solutions while using significantly fewer tokens
  • Production-ready with 200k+ pages/day throughput on single GPU
  • Open source enabling broader research and applications

The work demonstrates that visual compression can be a viable strategy for extending the effective context window of language models, with promising implications for memory mechanisms, efficient training data generation, and scalable document processing.

As large language models continue to evolve, approaches like DeepSeek-OCR will be essential for making these models more efficient, practical, and capable of handling real-world document understanding tasks at scale.


Citation:

@article{wei2025deepseekocr,
  title={DeepSeek-OCR: Contexts Optical Compression},
  author={Wei, Haoran and Sun, Yaofeng and Li, Yukun},
  journal={arXiv preprint arXiv:2510.18234},
  year={2025}
}